Search for: All records

Creators/Authors contains: "Liu, Qiang"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
What is a DOI Number?

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Recent advances in connectomics, biophysics, and neuronal electrophysiology warrant modeling of neurons with further detail in both network interaction and cellular dynamics. Such models may be referred to as ElectroPhysiome, as they incorporate the connectome and individual neuron electrophysiology to simulate neuronal activities. The nervous system of Caenorhabditis elegans is considered a viable framework for such ElectroPhysiome studies due to advances in connectomics of its somatic nervous system and electrophysiological recordings of neuron responses. In order to achieve a simulated ElectroPhysiome, the set of parameters involved in modeling individual neurons needs to be estimated from electrophysiological recordings. Here, we address this challenge by developing a deep generative estimation method called ElectroPhysiomeGAN (EP-GAN), which, once trained, can instantly generate parameters associated with the Hodgkin–Huxley neuron model (HH-model) for multiple neurons with graded potential responses. The method combines a generative adversarial network (GAN) architecture with a recurrent neural network encoder and can generate an extensive number of parameters (>170) given the neuron's membrane potential responses and steady-state current profiles. We validate our method by estimating HH-model parameters for 200 simulated neurons with graded membrane potential, followed by nine experimentally recorded neurons (six of which are newly recorded) in the nervous system of C. elegans. Comparison of EP-GAN with existing estimation methods shows EP-GAN's advantage in the accuracy of estimated parameters and in inference speed for both small and large numbers of inferred parameters. In addition, the architecture of EP-GAN permits input with arbitrary clamping protocols, allowing inference of parameters even when only partial membrane potential and steady-state current profiles are given as inputs. EP-GAN is designed to leverage the generative capability of GANs to align with the dynamical structure of the HH-model and is thus able to achieve such performance.
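     As a rough illustration of the architecture described in this abstract, the sketch below (PyTorch) pairs a recurrent encoder over clamped voltage/current traces with a generator head that emits a vector of HH-model parameters, plus a discriminator that scores (trace, parameter) pairs. All layer sizes, channel counts, and the 170-parameter output are illustrative assumptions; this is not the published EP-GAN architecture or training procedure.

     import torch
     import torch.nn as nn

     class ParameterGenerator(nn.Module):
         """Maps (clamped-response traces, latent noise) to HH-model parameters."""
         def __init__(self, n_channels=2, hidden=128, latent=64, n_params=170):
             super().__init__()
             # RNN encoder over the time series (voltage and current stacked as channels)
             self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
             # Generator head: trace summary + noise -> parameter vector
             self.head = nn.Sequential(
                 nn.Linear(hidden + latent, 256), nn.ReLU(),
                 nn.Linear(256, n_params),
             )

         def forward(self, traces, z):
             _, h = self.encoder(traces)            # h: (1, batch, hidden)
             return self.head(torch.cat([h.squeeze(0), z], dim=-1))

     class Discriminator(nn.Module):
         """Scores whether a (trace, parameter vector) pair looks real."""
         def __init__(self, n_channels=2, hidden=128, n_params=170):
             super().__init__()
             self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
             self.score = nn.Sequential(
                 nn.Linear(hidden + n_params, 256), nn.ReLU(),
                 nn.Linear(256, 1),
             )

         def forward(self, traces, params):
             _, h = self.encoder(traces)
             return self.score(torch.cat([h.squeeze(0), params], dim=-1))

     # Once trained adversarially, estimation is a single forward pass:
     gen = ParameterGenerator()
     traces = torch.randn(8, 500, 2)   # 8 neurons, 500 time steps, V and I channels
     z = torch.randn(8, 64)
     hh_params = gen(traces, z)        # (8, 170) estimated HH-model parameters

     Because estimation after training is a single forward pass, a sketch like this makes the inference-speed advantage reported in the abstract concrete.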
  2. Open radio access networks (e.g., O-RAN) facilitate fine-grained control (e.g., near-RT RIC) in next-generation networks, necessitating advanced AI/ML techniques for handling online resource orchestration in real time. However, existing approaches can hardly adapt to time-evolving network dynamics in network slicing, leading to significant online performance degradation. In this paper, we propose AdaSlicing, a new adaptive network slicing system that learns online to orchestrate virtual resources while efficiently adapting to continual network dynamics. The AdaSlicing system includes a new soft-isolated RAN virtualization framework and a novel AdaOrch algorithm. We design the AdaOrch algorithm by integrating AI/ML techniques (i.e., Bayesian learning agents) and optimization methods (i.e., an ADMM coordinator). We design the soft-isolated RAN virtualization to improve the virtual resource utilization of slices while assuring isolation among virtual resources at runtime. We implement AdaSlicing on an O-RAN-compliant network testbed using OpenAirInterface RAN, Open5GS Core, and FlexRIC near-RT RIC, with an Ettus USRP B210 SDR. With extensive network experiments, we demonstrate that AdaSlicing substantially outperforms state-of-the-art works, with a 64.2% cost reduction and a 45.5% normalized performance improvement, which verifies its high adaptability, scalability, and assurance.
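     The agent/coordinator split described in this abstract can be illustrated with a toy ADMM loop for the resource-sharing problem: each slice maximizes a concave utility a_i*x - b_i*x^2 (in AdaSlicing the utility estimates would come from the Bayesian learning agents; here the coefficients are fixed numbers), while a coordinator enforces a shared capacity budget. This is a generic sketch under those assumptions, not the AdaOrch algorithm or its O-RAN interfaces.

     import numpy as np

     a = np.array([4.0, 3.0, 2.0])   # per-slice marginal utility (stand-ins for posterior means)
     b = np.array([0.5, 0.4, 0.6])   # diminishing-returns curvature per slice
     C, rho, N = 6.0, 1.0, 3         # shared capacity, ADMM penalty, number of slices

     x = np.zeros(N)                  # per-slice allocations (local agent variables)
     z_bar = 0.0                      # coordinator's average-allocation variable
     u = 0.0                          # scaled dual variable (congestion "price")

     for _ in range(100):
         x_bar = x.mean()
         # Slice update: closed-form argmin of -U_i(x_i) + (rho/2)(x_i - v_i)^2
         v = x - x_bar + z_bar - u
         x = (a + rho * v) / (2 * b + rho)
         # Coordinator update: project the average allocation onto the capacity budget
         x_bar = x.mean()
         z_bar = min(x_bar + u, C / N)
         # Dual update: raise the price if slices collectively over-demand
         u = u + x_bar - z_bar

     print("allocations:", np.round(x, 3), "total:", round(x.sum(), 3), "budget:", C)

     The loop settles on allocations that exhaust the 6-unit budget, with higher-utility slices receiving more; the same agent/coordinator decomposition is what lets each slice keep its own learned model while the coordinator enforces runtime isolation of shared resources.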
  3. Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. In this paper, we propose a novel hybrid deep learning algorithm called IDLA (integrated deep learning with the Lagrangian method). This integrated approach aims to enhance the scalability, flexibility, and robustness of slicing resource allocation solutions by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. We then introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance IDLA's adaptability across diverse network environments and conditions. We propose pre-training a VIB-based Quality of Service (QoS) estimator using slice-specific inputs shared across all source-domain slices. Each target-domain slice can deploy this estimator to predict its QoS and optimize slice resource allocation using the IDLA algorithm. The VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluated on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. A transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.
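     A minimal sketch of the VIB-based QoS estimator idea follows (PyTorch): an encoder maps slice-specific inputs to a stochastic bottleneck, a linear head predicts QoS from the bottleneck, and a KL term limits how much input information the bottleneck retains. Feature and latent sizes, the KL weight, and the placeholder data are assumptions, not the paper's configuration.

     import torch
     import torch.nn as nn
     import torch.nn.functional as F

     class VIBQoSEstimator(nn.Module):
         def __init__(self, n_features=16, latent=8):
             super().__init__()
             self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                          nn.Linear(64, 2 * latent))   # mean and log-variance
             self.head = nn.Linear(latent, 1)                          # QoS prediction from z

         def forward(self, x):
             mu, log_var = self.encoder(x).chunk(2, dim=-1)
             z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()     # reparameterization trick
             kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1).mean()
             return self.head(z).squeeze(-1), kl

     # One fine-tuning step on a mixture of source- and target-domain samples
     # (random placeholder tensors stand in for slice features and observed QoS).
     model = VIBQoSEstimator()
     opt = torch.optim.Adam(model.parameters(), lr=1e-3)
     x, qos = torch.randn(32, 16), torch.randn(32)
     pred, kl = model(x)
     loss = F.mse_loss(pred, qos) + 1e-3 * kl   # KL weight beta = 1e-3 is an assumption
     opt.zero_grad(); loss.backward(); opt.step()

     Keeping the bottleneck small and penalizing its KL term encourages the estimator to rely on compact features, which is the property the abstract leverages for transfer from source to target domains.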
  4. Recently, a wide range of memory-efficient LLM training algorithms have gained substantial popularity. These methods leverage the low-rank structure of gradients to project optimizer states into a subspace using a projection matrix found by singular value decomposition (SVD). However, convergence of these algorithms is highly dependent on the update rules of their projection matrix. This work provides the first convergence guarantee for arbitrary update rules of projection matrices, generally applicable to optimizers that can be analyzed with Hamiltonian Descent, including common ones such as LION and Adam. Inspired by this theoretical understanding, the authors propose Online Subspace Descent, a new family of subspace descent optimizers that do not rely on SVD. Instead of updating the projection matrix with eigenvectors, Online Subspace Descent updates it with online PCA. This approach is flexible and introduces minimal overhead to training. Experiments show that for pretraining LLaMA models ranging from 60M to 7B parameters on the C4 dataset, Online Subspace Descent achieves lower perplexity and better downstream task performance than state-of-the-art low-rank training methods across settings, narrowing the gap with full-rank baselines. 
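     The subspace-descent idea in this abstract can be sketched in a few lines of NumPy: optimizer state is kept only in an r-dimensional subspace spanned by a projection matrix P, and P is refreshed with an online-PCA (Oja-style) step on each gradient rather than a periodic SVD. The quadratic toy objective, the plain momentum update, and all step sizes are illustrative assumptions, not the paper's exact Online Subspace Descent rules or its Adam/LION variants.

     import numpy as np

     d, r = 1024, 32                       # parameter dimension, subspace rank
     rng = np.random.default_rng(0)
     A = rng.standard_normal((256, d)) / np.sqrt(d)    # toy objective 0.5*||A w - b||^2
     b = rng.standard_normal(256)
     w = rng.standard_normal(d) * 0.01     # flattened model parameters
     P, _ = np.linalg.qr(rng.standard_normal((d, r)))  # orthonormal projection matrix
     m = np.zeros(r)                       # momentum state kept only in the subspace
     lr, beta, eta_pca = 1e-2, 0.9, 1e-3

     for step in range(200):
         g = A.T @ (A @ w - b)             # full-space gradient
         # Online-PCA (Oja-style) refresh of the projection, then re-orthonormalize
         P += eta_pca * np.outer(g, g @ P)
         P, _ = np.linalg.qr(P)
         # Optimizer state update in the low-rank subspace, then map back to full space
         m = beta * m + (1 - beta) * (P.T @ g)
         w -= lr * (P @ m)

     In this sketch the extra per-step cost beyond ordinary training is just the rank-r projection and a thin QR factorization, which illustrates why refreshing the subspace online can add little overhead compared with recomputing an SVD.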